

Whitepaper – Practical Attacks On Machine Learning Systems - AI Summary

#artificialintelligence

Written by Chris Anley, Chief Scientist, NCC Group. This paper collects a set of notes and research projects conducted by NCC Group on the topic of the security of Machine Learning (ML) systems. The objective is to provide some industry perspective to the academic community, while collating helpful references for security practitioners, to enable more effective security auditing and security-focused code review of ML systems. Details of specific practical attacks and common security problems are described. Some general background information on the broader subject of ML is also included, mostly for context, to ensure that explanations of attack scenarios are clear, and some notes on frameworks and development processes are provided.


Machine Learning Systems Vulnerable to Specific Attacks

#artificialintelligence

The growing number of organizations creating and deploying machine learning solutions raises concerns as to their intrinsic security, argues NCC Group in a recent whitepaper. The whitepaper provides a classification of attacks that may be carried out against machine learning systems, including examples based on popular libraries and platforms such as scikit-learn, Keras, PyTorch and TensorFlow. According to NCC Group, ML systems are subject to specific forms of attack in addition to more traditional attacks that may attempt to exploit infrastructure or application bugs, or other kinds of issues. A first vector of risk stems from the fact that many ML models contain code that is executed when the model is loaded, or when a particular condition is met, such as a given output class being predicted. As the whitepaper puts it, "Although the various mechanisms that allow this are to some extent documented, we contend that the security implications of this behaviour are not well-understood in the broader ML community."
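The load-time code execution risk described above can be sketched in a few lines of Python. Model formats built on `pickle` (used under the hood by, for example, PyTorch's `torch.load` and scikit-learn's joblib persistence) invoke an object's `__reduce__` method during deserialization, so an attacker-supplied "model file" can run code of the attacker's choosing the moment it is loaded. The `MaliciousModel` class below is a hypothetical stand-in for such a payload; a harmless `eval` expression is used in place of a real attack command so the sketch is safe to run.

```python
import pickle

class MaliciousModel:
    """Hypothetical attacker-crafted object posing as a serialized model."""
    def __reduce__(self):
        # pickle calls __reduce__ when serializing; on load, the returned
        # callable is invoked with these arguments. A real payload might
        # call os.system here -- we evaluate a harmless expression instead.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())  # the bytes of a "model file"
result = pickle.loads(payload)            # attacker's code executes here, at load time
assert result == 42                       # proof the attacker-chosen expression ran
```

This is why loading serialized models from untrusted sources should be treated with the same caution as running untrusted code: the act of loading is itself execution.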


New Research Points to Hidden Vulnerabilities Within Machine Learning Systems

#artificialintelligence

Government agencies collect a lot of data, and have access to even more of it in their archives. The trick has always been tapping into that store of information to improve decision-making, which is a major focus in government these days. The President's Management Agenda, for example, emphasizes the importance of data-driven decision-making to improve federal services. The volume of data that most agencies are working with is such that humans can't easily tap into it for help with that decision-making. And even if they can search that data, the process is slow.
